We provide an efficient solution to the regularized optimization problem $g(\boldsymbol{x}) + h(\boldsymbol{x})$, where $\boldsymbol{x}$ is constrained to the unit sphere $\Vert\boldsymbol{x}\Vert_2 = 1$. Here $g(\cdot)$ is a smooth cost with Lipschitz continuous gradient, while $h(\cdot)$ is typically non-smooth but convex and absolutely homogeneous, \textit{e.g.},~norm regularizers and their combinations. Our solution is based on the Riemannian proximal gradient, using an idea we call the \textit{proxy step-size}: a scalar variable which we prove varies monotonically with the actual step-size within an interval. For convex and absolutely homogeneous $h(\cdot)$, the proxy step-size exists and determines the actual step-size and the tangent update in closed form, and therefore the complete proximal gradient iteration. Based on these insights, we design a Riemannian proximal gradient method using the proxy step-size. We prove that our method converges to a critical point, guided by a line-search technique based only on the $g(\cdot)$ cost. The proposed method can be implemented in a few lines of code. We show its usefulness by applying nuclear norm, $\ell_1$ norm, and nuclear-spectral norm regularization. The improvements are consistent and are supported by numerical experiments.
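To make the problem structure concrete, below is a minimal NumPy sketch of a proximal-gradient-style iteration on the unit sphere with an $\ell_1$ regularizer. It uses a plain Euclidean prox step followed by a renormalization retraction; it does not implement the paper's closed-form proxy step-size derivation, and all names are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (a convex, absolutely homogeneous h).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sphere_prox_grad(grad_g, x0, lam=0.1, step=0.1, iters=100):
    """Toy proximal-gradient-style loop on the unit sphere.

    grad_g: callable returning the Euclidean gradient of the smooth cost g.
    The scheme here (Euclidean prox step + renormalization retraction) only
    illustrates the problem structure; the paper instead obtains the
    step-size and tangent update in closed form via the proxy step-size.
    """
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        g = grad_g(x)
        rg = g - (x @ g) * x                 # Riemannian gradient: tangent projection
        y = soft_threshold(x - step * rg, step * lam)  # prox of lam * ||.||_1
        n = np.linalg.norm(y)
        x = y / n if n > 0 else x            # retract back onto the sphere
    return x

# Example: sparse leading eigenvector, g(x) = -x^T A x.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 20)); A = A + A.T
x = sphere_prox_grad(lambda x: -2.0 * A @ x, rng.standard_normal(20))
print(np.count_nonzero(np.abs(x) > 1e-8), "nonzeros on the sphere")
```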
Generalized Procrustes Analysis (GPA) is the problem of bringing multiple shapes into a common reference by estimating transformations. GPA has been extensively studied for the Euclidean and affine transformations. We introduce GPA with deformable transformations, which forms a much wider and harder problem. We specifically study a class of transformations called the Linear Basis Warps (LBWs), which contains the affine transformation and most of the usual deformation models, such as the Thin-Plate Spline (TPS). GPA with deformations is a nonconvex unconstrained problem. We resolve the fundamental ambiguities of deformable GPA using two shape constraints, which require the eigenvalues of the shape covariance. These eigenvalues can be computed independently, as a prior or a posterior. We give a closed-form and optimal solution to deformable GPA based on an eigenvalue decomposition. This solution handles regularization, favoring smooth deformation fields. It requires the transformation model to satisfy a fundamental property of free translations, which asserts that the model can implement any translation. We show that, fortunately, this property holds for most common transformation models, including the affine and TPS models. For the other models, we give another closed-form solution to GPA, which agrees exactly with the first solution for models with free translations. We give pseudo-code for computing our solution, leading to the proposed DefPA method, which is fast, globally optimal, and widely applicable. We validate our method and compare it to previous work on six diverse 2D and 3D datasets, with special care taken to select the hyperparameters by cross-validation.
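For context, here is a short sketch of classical Euclidean GPA by alternation, the baseline that deformable GPA generalizes. DefPA itself replaces this iterative scheme with a closed-form eigendecomposition solution, which this sketch does not implement.

```python
import numpy as np

def procrustes_align(src, ref):
    # Best rigid transform (rotation + translation) mapping src onto ref (Kabsch).
    mu_s, mu_r = src.mean(0), ref.mean(0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (ref - mu_r))
    D = np.eye(U.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(U @ Vt))   # guard against reflections
    return (src - mu_s) @ (U @ D @ Vt) + mu_r

def gpa(shapes, iters=20):
    # Alternate: align every shape to the current reference, then re-average.
    ref = shapes[0] - shapes[0].mean(0)
    for _ in range(iters):
        aligned = [procrustes_align(s, ref) for s in shapes]
        ref = np.mean(aligned, axis=0)
        ref -= ref.mean(0)                       # pin the translation gauge
    return ref, aligned

# Example: three noisy rotated copies of a 2D shape.
rng = np.random.default_rng(0)
base = rng.standard_normal((30, 2))
def rot(a): return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
shapes = [base @ rot(a) + 0.01 * rng.standard_normal((30, 2)) for a in (0.0, 0.7, 2.1)]
ref, aligned = gpa(shapes)
```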
A variety of optimization problems takes the form of minimum-norm optimization. In this paper, we study the change of the optimal values between two incrementally constructed least-norm optimization problems, where the second problem additionally includes new measurements. We prove an exact equation to compute the change of the optimal value in linear least-norm optimization problems. With the results of this paper, the change of the optimal values can be pre-computed as a metric to guide online decision-making, without having to actually solve the second optimization problem, as long as the solution and covariance of the first optimization problem are available. The result can be extended to linear least-distance optimization problems, and to nonlinear least-distance optimization with (nonlinear) equality constraints through linearization. This derivation provides a theoretical explanation for the empirical observations shown in Bai et al. (RA-L 2018). As another contribution, we propose a further optimization problem, namely aligning two trajectories at given poses, to demonstrate how to use the metric. The accuracy of the metric is validated with numerical examples and is generally satisfactory (see the experiments in Bai et al. (RA-L 2018)), except in some extremely adverse scenarios. Last but not least, computing the change of the optimal value via the proposed metric is at least slightly faster than directly solving the corresponding optimization problem.
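One standard way to see why such an exact, pre-computable change exists is a Schur-complement identity for the linear minimum-norm problem $\min \Vert x \Vert^2$ s.t. $Ax=b$: with $x_1^\star = A^+ b$, $P = I - A^+ A$, and residual $r = d - Cx_1^\star$ of the new measurements $Cx=d$, the optimal value increases by exactly $r^\top (CPC^\top)^{-1} r$ (assuming full row rank of the stacked system). The identity and names below are ours as an illustration; the paper's exact equation and covariance convention may be stated differently.

```python
import numpy as np

rng = np.random.default_rng(1)
m, k, n = 4, 2, 10
A, b = rng.standard_normal((m, n)), rng.standard_normal(m)
C, d = rng.standard_normal((k, n)), rng.standard_normal(k)

# First problem: min ||x||^2 s.t. Ax = b.
x1 = np.linalg.pinv(A) @ b
f1 = x1 @ x1

# Second problem adds measurements Cx = d; solve it directly for reference.
x2 = np.linalg.pinv(np.vstack([A, C])) @ np.concatenate([b, d])
f2 = x2 @ x2

# Incremental change, using only the first solution and a projector tied to
# the first problem (which plays the role of a "covariance" here).
P = np.eye(n) - np.linalg.pinv(A) @ A        # projector onto null(A)
r = d - C @ x1                               # residual of the new measurements
delta = r @ np.linalg.solve(C @ P @ C.T, r)

print(f2 - f1, delta)   # the two numbers agree up to round-off
```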
Deep learning (DL) models have provided state-of-the-art performance on various medical imaging benchmark challenges, including the Brain Tumor Segmentation (BraTS) challenge. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder the translation of DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties could enable clinical review of the most uncertain regions, thereby building trust and paving the way toward clinical translation. Recently, a number of uncertainty estimation methods have been introduced for DL medical image segmentation tasks. Developing metrics to evaluate and compare the performance of uncertainty measures will assist end users in making more informed decisions. In this study, we explore and evaluate a metric developed during the BraTS 2019-2020 task on uncertainty quantification (QU-BraTS), designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This metric (1) rewards uncertainty estimates that produce high confidence in correct assertions, and those that assign low confidence levels to incorrect assertions, and (2) penalizes uncertainty measures that lead to a higher percentage of under-confident correct assertions. We further benchmark the segmentation uncertainties generated by 14 independent teams participating in QU-BraTS 2020, all of which also participated in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, thereby highlighting the need for uncertainty quantification in medical image analysis. Our evaluation code is made publicly available at https://github.com/ragmeh11/qu-brats.
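As a rough illustration of how such a metric can be assembled, the sketch below sweeps confidence thresholds, treats low-confidence voxels as filtered out, and tracks Dice on the retained voxels together with the ratio of filtered-out correct assertions. This is a simplified stand-in with names of our choosing; the exact definition is in the linked repository.

```python
import numpy as np

def qu_brats_style_score(pred, truth, conf, thresholds=np.arange(0.0, 1.01, 0.05)):
    """Simplified QU-BraTS-style score on binary maps (illustrative only).

    pred, truth: binary arrays; conf: per-voxel confidence in [0, 1].
    """
    dices, ftps, ftns = [], [], []
    tp0 = np.sum((pred == 1) & (truth == 1))   # correct positive assertions
    tn0 = np.sum((pred == 0) & (truth == 0))   # correct negative assertions
    for t in thresholds:
        keep = conf >= t                       # voxels confident enough to keep
        p, g = pred[keep], truth[keep]
        tp = np.sum((p == 1) & (g == 1))
        tn = np.sum((p == 0) & (g == 0))
        dices.append(2 * tp / (p.sum() + g.sum() + 1e-8))
        ftps.append((tp0 - tp) / (tp0 + 1e-8)) # filtered-out correct positives
        ftns.append((tn0 - tn) / (tn0 + 1e-8)) # filtered-out correct negatives
    # Reward high Dice across confidence levels; penalize filtering correct voxels.
    return (np.mean(dices) + (1 - np.mean(ftps)) + (1 - np.mean(ftns))) / 3
```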
Hyperspectral image (HSI) classification has been a hot topic for decades, as hyperspectral images have rich spatial and spectral information and provide a strong basis for distinguishing different land-cover objects. Benefiting from the development of deep learning technologies, deep learning-based HSI classification methods have achieved promising performance. Recently, several neural architecture search (NAS) algorithms have been proposed for HSI classification, further raising the accuracy of HSI classification to a new level. In this paper, NAS and Transformer are combined to handle the HSI classification task for the first time. Compared with previous work, the proposed method has two main differences. First, we revisit the search spaces designed in previous HSI classification NAS methods and propose a novel hybrid search space consisting of a space-dominated cell and a spectrum-dominated cell. Compared with the search spaces proposed in previous works, the proposed hybrid search space is better aligned with the characteristics of HSI data, i.e., HSIs have a relatively low spatial resolution and an extremely high spectral resolution. Second, to further improve the classification accuracy, we attempt to graft the emerging Transformer module onto the automatically designed convolutional neural network (CNN), to add global information to the local-region features learned by the CNN. Experimental results on three public HSI datasets show that the proposed method performs better than the comparison methods, including manually designed networks and NAS-based HSI classification methods. Especially on the recently captured University of Houston dataset, the overall accuracy is improved by nearly 6 percentage points. Code is available at: https://github.com/cecilia-xue/hyt-nas.
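The grafting idea, treating CNN feature-map pixels as Transformer tokens so global context is added to local features, can be sketched as follows. The shapes and module sizes are placeholders of our choosing, not the NAS-discovered architecture.

```python
import torch
import torch.nn as nn

class CNNWithTransformer(nn.Module):
    """Illustrative graft of a Transformer block onto CNN features."""
    def __init__(self, bands=103, classes=9, dim=64):
        super().__init__()
        self.cnn = nn.Sequential(               # local spatial-spectral features
            nn.Conv2d(bands, dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(),
        )
        enc = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc, num_layers=1)
        self.head = nn.Linear(dim, classes)

    def forward(self, x):                       # x: (B, bands, H, W) HSI patch
        f = self.cnn(x)                         # (B, dim, H, W)
        tokens = f.flatten(2).transpose(1, 2)   # (B, H*W, dim): pixels as tokens
        g = self.transformer(tokens)            # inject global context
        return self.head(g.mean(dim=1))         # pool tokens, classify the patch

model = CNNWithTransformer()
logits = model(torch.randn(2, 103, 9, 9))       # e.g., Pavia-style 9x9 patches
```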
Deep learning frameworks have often focused on either usability or speed, but not both. PyTorch is a machine learning library that shows that these two goals are in fact compatible: it provides an imperative and Pythonic programming style that supports code as a model, makes debugging easy and is consistent with other popular scientific computing libraries, while remaining efficient and supporting hardware accelerators such as GPUs. In this paper, we detail the principles that drove the implementation of PyTorch and how they are reflected in its architecture. We emphasize that every aspect of PyTorch is a regular Python program under the full control of its user. We also explain how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance. We demonstrate the efficiency of individual subsystems, as well as the overall speed of PyTorch on several common benchmarks.
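The imperative, define-by-run style the paper describes is ordinary Python. The snippet below fits a linear model with autograd and runs unchanged on CPU or GPU.

```python
import torch

# Ordinary Python control flow is the model: nothing is staged or compiled,
# and the same code runs on CPU or GPU by moving tensors.
device = "cuda" if torch.cuda.is_available() else "cpu"

w = torch.randn(3, requires_grad=True, device=device)
x = torch.randn(100, 3, device=device)
y = x @ torch.tensor([1.0, -2.0, 0.5], device=device)

for step in range(200):
    loss = ((x @ w - y) ** 2).mean()   # define-by-run: the graph is built here
    loss.backward()                    # reverse-mode autodiff through the Python code
    with torch.no_grad():
        w -= 0.1 * w.grad
        w.grad.zero_()

print(loss.item())   # debugging is plain Python: print, pdb, etc. all work
```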
Benefiting from the intrinsic supervision information exploitation capability, contrastive learning has achieved promising performance in the field of deep graph clustering recently. However, we observe that two drawbacks of the positive and negative sample construction mechanisms limit the performance of existing algorithms from further improvement. 1) The quality of positive samples heavily depends on carefully designed data augmentations, while inappropriate data augmentations easily lead to semantic drift and indiscriminative positive samples. 2) The constructed negative samples are not reliable since they ignore important clustering information. To solve these problems, we propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC) that mines the intrinsic supervision information in high-confidence clustering results. Specifically, instead of conducting complex node or edge perturbation, we construct two views of the graph by designing special Siamese encoders whose weights are not shared between the sibling sub-networks. Then, guided by the high-confidence clustering information, we carefully select and construct the positive samples from the same high-confidence cluster in the two views. Moreover, to construct semantically meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples, thus improving the discriminative capability and reliability of the constructed sample pairs. Lastly, we design an objective function that pulls together samples from the same cluster while pushing away those from other clusters, by maximizing and minimizing the cross-view cosine similarity between positive and negative samples respectively. Extensive experimental results on six datasets demonstrate the effectiveness of CCGC compared with existing state-of-the-art algorithms.
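A hedged sketch of the objective just described: cross-view cosine similarity is maximized for high-confidence positive pairs and minimized between the two views' cluster centers. The function and variable names are ours, not CCGC's.

```python
import torch
import torch.nn.functional as F

def cluster_guided_contrastive_loss(z1, z2, pos_idx, centers1, centers2):
    """Sketch of a CCGC-style objective (illustrative, not the paper's code).

    z1, z2: (N, d) embeddings of the two views from unshared Siamese encoders.
    pos_idx: indices of high-confidence nodes used to form positive pairs.
    centers1, centers2: (K, d) high-confidence cluster centers used as negatives.
    """
    p1 = F.normalize(z1[pos_idx], dim=1)
    p2 = F.normalize(z2[pos_idx], dim=1)
    pos = (p1 * p2).sum(dim=1).mean()                  # pull positives together
    c1 = F.normalize(centers1, dim=1)
    c2 = F.normalize(centers2, dim=1)
    sim = c1 @ c2.T                                    # (K, K) center similarities
    off_diag = ~torch.eye(len(c1), dtype=torch.bool)
    neg = sim[off_diag].mean()                         # push different clusters apart
    return -pos + neg
```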
As one of the prevalent methods to achieve automation systems, Imitation Learning (IL) presents promising performance in a wide range of domains. However, despite the considerable improvement in policy performance, the corresponding research on the explainability of IL models is still limited. Inspired by recent approaches in explainable artificial intelligence, we propose a model-agnostic explanation framework for IL models called R2RISE. R2RISE aims to explain the overall policy performance with respect to the frames in demonstrations. It iteratively retrains the black-box IL model from randomized masked demonstrations and uses the conventional evaluation outcome, i.e., environment returns, as the coefficient to build an importance map. We also conducted experiments to investigate three major questions concerning frames' importance equality, the effectiveness of the importance map, and connections between importance maps from different IL models. The results show that R2RISE successfully distinguishes important frames from the demonstrations.
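The RISE-style importance map can be sketched as below; the expensive retrain-and-evaluate loop is abstracted behind a callable, and the names are illustrative rather than R2RISE's own.

```python
import numpy as np

def r2rise_importance(demo_len, train_and_eval, n_masks=100, p_keep=0.5):
    """Sketch of a RISE-style importance map over demonstration frames.

    train_and_eval: callable taking a binary frame mask (which frames the IL
    model may train on) and returning the policy's environment return; it
    hides the retraining loop the paper describes.
    """
    importance = np.zeros(demo_len)
    total = np.zeros(demo_len)
    rng = np.random.default_rng(0)
    for _ in range(n_masks):
        mask = rng.random(demo_len) < p_keep   # randomly keep ~half the frames
        ret = train_and_eval(mask)             # retrain on masked demo, evaluate
        importance += ret * mask               # credit the return to kept frames
        total += mask
    return importance / np.maximum(total, 1)   # per-frame average weighted return
```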
Increasing research interest focuses on sequential recommender systems, aiming to model dynamic sequence representations precisely. However, the most commonly used loss functions in state-of-the-art sequential recommendation models have essential limitations. To name a few, Bayesian Personalized Ranking (BPR) loss suffers from the vanishing gradient problem caused by numerous negative samples and prediction biases; Binary Cross-Entropy (BCE) loss is sensitive to the number of negative samples, and is thus likely to ignore valuable negative examples and reduce training efficiency; Cross-Entropy (CE) loss only focuses on the last timestamp of the training sequence, which causes low utilization of sequence information and results in inferior user sequence representations. To avoid these limitations, in this paper we propose to calculate Cumulative Cross-Entropy (CCE) loss over the sequence. CCE is simple and direct, enjoying the virtues of painless deployment, no negative sampling, and effective and efficient training. We conduct extensive experiments on five benchmark datasets to demonstrate the effectiveness and efficiency of CCE. The results show that employing CCE loss on three state-of-the-art models GRU4Rec, SASRec, and S3-Rec can reach 125.63%, 69.90%, and 33.24% average improvement of full ranking NDCG@5, respectively. Using CCE, the performance curve of the models on the test data rises rapidly with wall-clock time, and is superior to that of other loss functions through almost the whole course of model training.
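As described, CCE reduces to full-softmax cross-entropy accumulated over every timestamp rather than only the last one; a minimal PyTorch sketch follows (padding handling omitted; in practice one would pass an ignore_index for padded positions).

```python
import torch
import torch.nn.functional as F

def cce_loss(logits, targets):
    """Cumulative cross-entropy over a user sequence (illustrative sketch).

    logits: (B, T, V) next-item scores at every timestamp over all V items.
    targets: (B, T) ground-truth next items. Plain CE would use only the
    last timestamp; CCE accumulates the full-softmax CE at every position.
    """
    B, T, V = logits.shape
    return F.cross_entropy(logits.reshape(B * T, V), targets.reshape(B * T))

# Usage: logits from e.g. SASRec over the whole sequence, no negative sampling.
loss = cce_loss(torch.randn(4, 50, 1000), torch.randint(0, 1000, (4, 50)))
```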
Face Anti-spoofing (FAS) is essential to secure face recognition systems from various physical attacks. However, recent research generally focuses on short-distance applications (i.e., phone unlocking) while lacking consideration of long-distance scenes (i.e., surveillance security checks). In order to promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, which has 101 subjects from different age groups with 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In this setting, low image resolution and noise interference are the new challenges faced in surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality, from three aspects: (1) an Image Quality Variable module (IQV) is introduced to recover the image information associated with discrimination by incorporating a super-resolution network; (2) generated sample pairs are used to simulate quality variance distributions, helping the contrastive learning strategy obtain robust feature representations under quality variation; and (3) a Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, extensive experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
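As a rough sketch of aspect (2), one can simulate surveillance-quality degradation to build quality-variant positive pairs for contrastive training. The IQV super-resolution branch and the SQN are omitted, and all names here are ours, not the paper's.

```python
import torch
import torch.nn.functional as F

def quality_pair(face, scale=0.25, noise=0.05):
    """Build a quality-variant positive pair (illustrative sketch).

    Simulates long-distance surveillance degradation (low resolution + noise)
    of a clean face crop, in [0, 1] range with shape (B, C, H, W), so an
    encoder can be trained to produce quality-invariant features.
    """
    h, w = face.shape[-2:]
    low = F.interpolate(face, scale_factor=scale, mode="bilinear", align_corners=False)
    low = F.interpolate(low, size=(h, w), mode="bilinear", align_corners=False)
    low = (low + noise * torch.randn_like(low)).clamp(0, 1)
    return face, low          # positive pair differing only in image quality
```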